Dam defect object detection method based on improved single shot multibox detector
CHEN Jing, MAO Yingchi, CHEN Hao, WANG Longbao, WANG Zicheng
Journal of Computer Applications    2021, 41 (8): 2366-2372.   DOI: 10.11772/j.issn.1001-9081.2020101603
In order to improve the efficiency of dam safety operation and maintenance, dam defect object detection models can assist inspectors in defect detection. Because dam defects vary widely in geometric shape, the Single Shot MultiBox Detector (SSD) model, which uses standard convolutions for feature extraction, cannot adapt to the geometric transformations of defects. To address this problem, a DeFormable convolution Single Shot multi-box Detector (DFSSD) was proposed. Firstly, in the backbone network of the original SSD, Visual Geometry Group (VGG16), the standard convolution was replaced by deformable convolution to handle the geometric transformations of defects, and the model's ability to model spatial information was increased by learning convolution offsets. Secondly, the ratios of the prior bounding boxes were adjusted according to the sizes of different features, improving the model's detection accuracy on bar-shaped defects and its generalization ability. Finally, to address the imbalance between positive and negative samples in the training set, an improved Non-Maximum Suppression (NMS) algorithm was adopted to optimize the learning effect. Experimental results show that the average detection accuracy of DFSSD on dam defect images is improved by 5.98% compared with the benchmark SSD model. Comparisons with the Faster Region-based Convolutional Neural Network (Faster R-CNN) and SSD models show that DFSSD is more effective at improving the detection accuracy of dam defect objects.
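The abstract mentions an improved Non-Maximum Suppression step; for context, a minimal sketch of the standard greedy NMS that such improvements start from (boxes as (x1, y1, x2, y2) tuples; this is the textbook baseline, not the paper's modified algorithm):

```python
def _area(r):
    """Area of a box (x1, y1, x2, y2)."""
    return (r[2] - r[0]) * (r[3] - r[1])

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    return inter / (_area(a) + _area(b) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    remaining box and drop boxes that overlap it above iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep
```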
Multi-user task offloading strategy based on stable allocation
MAO Yingchi, XU Xuesong, LIU Pengfei
Journal of Computer Applications    2021, 41 (3): 786-793.   DOI: 10.11772/j.issn.1001-9081.2020060861
With the emergence of many computation-intensive applications, mobile devices cannot meet user requirements on delay and energy consumption because of their limited computing capability. Mobile Edge Computing (MEC) offloads user task computation to an MEC server through a wireless channel, significantly reducing task response delay and energy consumption. Concerning the problem of multi-user task offloading, a Multi-User Task Offloading strategy based on Stable Allocation (MUTOSA) was proposed to minimize energy consumption while guaranteeing the user delay requirement. Firstly, the multi-user task offloading problem in the independent-task scenario was modeled with both delay and energy consumption taken into account. Then, an adjustment strategy was proposed based on the delayed-reception (deferred acceptance) idea from stable allocation in game theory. Finally, the multi-user task offloading problem was solved through continuous iteration. Experimental results show that, compared with the benchmark strategy and a heuristic strategy, the proposed strategy can meet the delay requirements of more users, increase user satisfaction by about 10% on average, and reduce the total energy consumption of user devices by about 50%. These results indicate that the proposed strategy can effectively reduce energy consumption while guaranteeing the user delay requirement, and can effectively improve the user experience of delay-sensitive applications.
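The deferred-acceptance idea the strategy borrows comes from classic stable matching. As a hedged illustration (generic Gale-Shapley, with hypothetical user/server identifiers; the paper's adjustment strategy adapts this idea rather than using it verbatim):

```python
def deferred_acceptance(proposer_prefs, acceptor_prefs):
    """Classic deferred-acceptance (Gale-Shapley) matching: proposers
    propose in preference order; each acceptor tentatively holds its
    best offer so far and may later trade up."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)               # proposers still unmatched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                              # acceptor -> proposer
    while free:
        p = free.pop(0)
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])           # current holder is bumped
            engaged[a] = p
        else:
            free.append(p)                    # rejected; try next choice
    return engaged
```

Offers are only ever held tentatively, which is what makes the final matching stable: no user-server pair would both prefer each other over their assigned partners.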
CNN model compression based on activation-entropy based layer-wise iterative pruning strategy
CHEN Chengjun, MAO Yingchi, WANG Yichao
Journal of Computer Applications    2020, 40 (5): 1260-1265.   DOI: 10.11772/j.issn.1001-9081.2019111977
Since existing pruning strategies for the Convolutional Neural Network (CNN) model differ from one another and achieve only moderate effects, an Activation-Entropy based Layer-wise Iterative Pruning (AE-LIP) strategy was proposed to reduce the parameter amount of the model while keeping the accuracy loss within a controllable range. Firstly, combining neuron activation values with information entropy, a weight-evaluation criterion based on activation-entropy was constructed, and an importance score was computed for each weight. Secondly, pruning was performed layer by layer: the weights were sorted by importance score, and the per-layer pruning number was used to select the weights to be pruned and set them to zero. Finally, the model was fine-tuned, and the above process was repeated until the iteration ended. Experimental results show that the activation-entropy based layer-wise iterative pruning strategy compresses the AlexNet model by 87.5% with an accuracy reduction of 2.12 percentage points, which is 1.54 percentage points better than the magnitude-based weight pruning strategy and 0.91 percentage points better than the correlation-based weight pruning strategy; it compresses the VGG-16 model by 84.1% with an accuracy reduction of 2.62 percentage points, which is 0.62 and 0.27 percentage points better than the two strategies above, respectively. It can be seen that the proposed strategy effectively reduces the size of the CNN model while maintaining model accuracy, and is helpful for deploying CNN models on mobile devices with limited storage.
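The core mechanical step the abstract describes — sort weights by an importance score, zero out the lowest-scoring fraction in a layer — can be sketched as follows. The scoring criterion here is left abstract (the paper's activation-entropy score is not reproduced; any per-weight score list works):

```python
def prune_layer(weights, scores, prune_ratio):
    """Zero out the prune_ratio fraction of weights with the lowest
    importance scores; weights and scores are parallel flat lists."""
    k = int(len(weights) * prune_ratio)       # number of weights to zero
    if k == 0:
        return list(weights)
    # indices of the k lowest-scoring weights
    drop = set(sorted(range(len(weights)), key=lambda i: scores[i])[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]
```

In the iterative scheme, this pass is applied layer by layer and alternated with fine-tuning until the target compression is reached.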
Trajectory prediction based on Gauss mixture time series model
GAO Jian, MAO Yingchi, LI Zhitao
Journal of Computer Applications    2019, 39 (8): 2261-2270.   DOI: 10.11772/j.issn.1001-9081.2019010030
Considering the large variation of trajectory prediction error caused by changes of road traffic flow at different times, a Gaussian Mixture Time Series Model (GMTSM) based on a probability distribution model was proposed; it performs model regression on massive historical vehicle trajectories and analyzes road traffic flow to realize vehicle trajectory prediction. Firstly, aiming at the problem that uniform grid partition tends to split related trajectory points apart, an iterative grid partition method was proposed to balance the number of trajectory points per grid. Secondly, a Gaussian Mixture Model (GMM) and the AutoRegressive Integrated Moving Average (ARIMA) model from time series analysis were trained and combined. Thirdly, to prevent the instability of GMTSM's sub-models from interfering with the prediction results, the weights of the sub-models were calculated dynamically by analyzing their prediction errors. Finally, the sub-models were combined with these dynamic weights to realize trajectory prediction. Experimental results show that the average prediction accuracy of GMTSM is 92.3% in the case of sudden changes of road traffic flow. Compared with the Gaussian mixture model and the Markov model under the same parameters, GMTSM improves prediction accuracy by about 55%. GMTSM can not only predict vehicle trajectories accurately under normal circumstances, but also effectively improve trajectory prediction accuracy under changing road traffic flow, making it applicable to real road environments.
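One common way to realize the error-driven dynamic weighting the abstract describes is to weight each sub-model inversely to its recent error. A minimal sketch under that assumption (the paper's exact weighting scheme may differ):

```python
def dynamic_weights(errors):
    """Weights inversely proportional to each sub-model's recent mean
    absolute error; a smaller error yields a larger weight."""
    inv = [1.0 / (e + 1e-9) for e in errors]  # epsilon guards against e == 0
    total = sum(inv)
    return [w / total for w in inv]

def combine(predictions, errors):
    """Weighted combination of sub-model predictions (e.g. GMM and ARIMA)."""
    weights = dynamic_weights(errors)
    return sum(w * p for w, p in zip(weights, predictions))
```

With equal recent errors the sub-models average; as one sub-model degrades (e.g. ARIMA during a sudden traffic-flow change), its influence shrinks automatically.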
Feature selection based on maximum conditional and joint mutual information
MAO Yingchi, CAO Hai, PING Ping, LI Xiaofang
Journal of Computer Applications    2019, 39 (3): 734-741.   DOI: 10.11772/j.issn.1001-9081.2018081694
In the analysis of high-dimensional data such as image, genetic and text data, redundant features greatly increase the complexity of the problem, so it is important to remove them before data analysis. Feature selection based on Mutual Information (MI) can reduce the data dimension and improve the accuracy of analysis results, but existing feature selection methods cannot reasonably eliminate redundant features because they rely on a single criterion. To solve this problem, a feature selection method based on Maximum Conditional and Joint Mutual Information (MCJMI) was proposed. Both joint mutual information and conditional mutual information were considered when selecting features with MCJMI, improving the feature selection constraint. Experimental results show that the detection accuracy is improved by 6% compared with Information Gain (IG) and minimum Redundancy Maximum Relevance (mRMR) feature selection, by 2% compared with Joint Mutual Information (JMI) and Joint Mutual Information Maximisation (JMIM), and by 1% compared with the LW index with Sequential Forward Search algorithm (SFS-LW). The stability of MCJMI reaches 0.92, better than that of JMI, JMIM and SFS-LW. In summary, the proposed method can effectively improve the accuracy and stability of feature selection.
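All the criteria compared here build on empirical mutual information between discrete variables. As a self-contained reference point (standard definition, not the MCJMI scoring itself), I(X;Y) estimated from paired samples:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits from paired samples:
    sum over (x, y) of p(x, y) * log2(p(x, y) / (p(x) * p(y)))."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi
```

Joint and conditional mutual information, which MCJMI maximizes, are built from the same quantity, e.g. I(X,Z;Y) and I(X;Y|Z).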
Task assignment method based on cloud-fog cooperative model
LIU Pengfei, MAO Yingchi, WANG Longbao
Journal of Computer Applications    2019, 39 (1): 8-14.   DOI: 10.11772/j.issn.1001-9081.2018071642
To realize reasonable allocation and scheduling of mobile users' task requests under cloud-fog collaboration, a task assignment algorithm based on a cloud-fog collaboration model, named IGA (Improved Genetic Algorithm), was proposed. Firstly, individuals were encoded with a mixed coding scheme and an initial population was generated randomly. Secondly, the objective function was set to the cost of service providers. Then, selection, crossover and mutation operations were applied to produce new qualified individuals. Finally, each request type in a chromosome was assigned to the corresponding resource node and the iteration counter was updated until the iterations completed. Simulation results show that, compared with the traditional cloud model, the cloud-fog collaboration model reduces delay by nearly 30 seconds, reduces the Service Level Objective (SLO) violation rate by nearly 10%, and reduces the cost of service providers.
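The select/crossover/mutate loop described above can be sketched generically. This is a plain GA skeleton minimizing a supplied cost function over task-to-node assignment vectors; the encoding, operators and cost model are simplified stand-ins, not the paper's mixed coding or provider cost function:

```python
import random

def genetic_assign(cost, n_tasks, n_nodes, pop_size=20,
                   generations=50, mutation_rate=0.1, seed=0):
    """Generic GA: chromosome gene[i] maps task i to a resource node.
    Tournament selection, one-point crossover, random-reset mutation."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_nodes) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if cost(a) < cost(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_tasks)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_tasks):               # random-reset mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.randrange(n_nodes)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=cost)
```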
Cloud resource scheduling method based on combinatorial double auction
MAO Yingchi, HAO Shuai, PING Ping, QI Rongzhi
Journal of Computer Applications    2019, 39 (1): 1-7.   DOI: 10.11772/j.issn.1001-9081.2018071614
Aiming at the problem of resource scheduling across data centers, a Priority Combinatorial Double Auction (PCDA) resource scheduling scheme was proposed. Firstly, the cloud resource auction was divided into three parts: cloud user agents bidding, cloud resource providers offering, and the auction agent organizing the auction. Secondly, on the basis of defining user priority and task urgency, the Service Level Agreement (SLA) violations of each job during the auction were estimated and the revenue of the cloud provider was calculated; meanwhile, multiple transactions were allowed in each round of bidding. Finally, reasonable cloud resource scheduling according to user level could be achieved. Simulation results show that the algorithm guarantees the success rate of the auction; compared with a traditional auction, PCDA reduces energy consumption by 35.00%, and the cloud provider's profit from the auction is about 38.84%.
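For orientation only, a minimal sketch of the plain double-auction clearing that PCDA extends with priorities, urgency and SLA terms (the midpoint pricing rule here is one textbook convention, not the paper's mechanism):

```python
def double_auction(bids, asks):
    """Match highest buyer bids with lowest seller asks; a trade clears
    while bid >= ask, here priced at the midpoint of the two quotes."""
    bids = sorted(bids, key=lambda b: -b[1])  # (buyer, price), high to low
    asks = sorted(asks, key=lambda a: a[1])   # (seller, price), low to high
    trades = []
    for (buyer, bid_p), (seller, ask_p) in zip(bids, asks):
        if bid_p >= ask_p:
            trades.append((buyer, seller, (bid_p + ask_p) / 2))
        else:
            break                             # no further pair can clear
    return trades
```

A combinatorial variant lets agents bid on bundles of resources, and a priority-aware variant (as in PCDA) reorders the queues by user level before matching.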
Multi-type task assignment and scheduling oriented to spatial crowdsourcing
MAO Yingchi, MU Chao, BAO Wei, LI Xiaofang
Journal of Computer Applications    2018, 38 (1): 6-12.   DOI: 10.11772/j.issn.1001-9081.2017071886
Aiming at the quality and quantity problems of multi-type task completion in spatial crowdsourcing, a method of multi-type task assignment and scheduling was proposed. Firstly, in the task assignment process, a Distance-ε-based Assignment (ε-DA) algorithm was proposed by combining the characteristics of multi-type tasks and users in spatial crowdsourcing and improving the greedy allocation algorithm; tasks were assigned to nearby users to improve the quality of task completion. Secondly, the Branch and Bound Schedule (BBS) idea was utilized to schedule task sequences according to professional matching scores, and the best task sequence was found. Because the branch-and-bound scheduling algorithm runs slowly, the Most Promising Branch Heuristic (MPBH) algorithm was presented, through which local optimization was achieved in each task allocation; compared with the branch-and-bound scheduling algorithm, its running speed was increased by 30%. The experimental results show that the proposed method can improve the quality and quantity of task completion while raising running speed and accuracy.
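A hypothetical sketch of the distance-bounded greedy assignment idea behind ε-DA (the task/user data shapes and the one-task-per-user rule are illustrative assumptions, not the paper's exact formulation):

```python
def epsilon_assign(tasks, users, eps):
    """Greedy distance-based assignment: each task goes to the nearest
    still-available user within distance eps; otherwise it stays
    unassigned. Positions are (x, y) tuples keyed by id."""
    available = dict(users)                   # user id -> position
    assignment = {}                           # task id -> user id
    for tid, (tx, ty) in tasks.items():
        best, best_d = None, eps
        for uid, (ux, uy) in available.items():
            d = ((tx - ux) ** 2 + (ty - uy) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = uid, d
        if best is not None:
            assignment[tid] = best
            del available[best]               # one task per user here
    return assignment
```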
M-TAEDA: temporal abnormal event detection algorithm for multivariate time-series data of water quality
MAO Yingchi, QI Hai, JIE Qing, WANG Longbao
Journal of Computer Applications    2017, 37 (1): 138-144.   DOI: 10.11772/j.issn.1001-9081.2017.01.0138
Real-time time-series data of multiple water quality parameters are acquired via water sensor networks deployed in the water supply network. When pollution occurs, accurate and efficient detection and warning of pollution events to prevent the pollution from spreading is one of the most important issues. In order to comprehensively evaluate abnormal event detection and reduce detection deviation, a Temporal Abnormal Event Detection Algorithm for Multivariate time-series data (M-TAEDA) was proposed. In M-TAEDA, the time-series data of each parameter are analyzed with a Back Propagation (BP) model to determine possible outliers; potential pollution events are then detected through Bayesian sequential analysis, which estimates the probability of an abnormal event; finally, a decision is made by fusing the event probabilities of the multiple parameters in the water supply system. Experimental results indicate that the proposed M-TAEDA algorithm can achieve 90% accuracy with the BP model, improving the detection rate by about 40% and reducing the false alarm rate by about 45% compared with the Single-Variate Temporal Abnormal Event Detection Algorithm (S-TAEDA).
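The Bayesian sequential analysis step can be illustrated generically: fold a stream of observations into a running posterior probability of an abnormal event. A minimal sketch assuming each observation is summarized by its likelihood under the "event" and "normal" hypotheses (the paper's likelihood models come from the BP outlier analysis and are not reproduced here):

```python
def bayes_update(prior, likelihood_event, likelihood_normal):
    """One Bayes step: posterior probability of an abnormal event given
    the likelihood of the current observation under each hypothesis."""
    p_event = likelihood_event * prior
    p_normal = likelihood_normal * (1.0 - prior)
    return p_event / (p_event + p_normal)

def sequential_probability(prior, observations):
    """Fold (P(obs|event), P(obs|normal)) pairs into a running posterior."""
    p = prior
    for like_event, like_normal in observations:
        p = bayes_update(p, like_event, like_normal)
    return p
```

Repeated evidence compounds: each suspicious observation raises the posterior further, which is what lets sequential analysis trade off detection delay against false alarms.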
Moving target tracking scheme based on dynamic clustering
BAO Wei, MAO Yingchi, WANG Longbao, CHEN Xiaoli
Journal of Computer Applications    2017, 37 (1): 65-72.   DOI: 10.11772/j.issn.1001-9081.2017.01.0065
Focusing on the low accuracy, high energy consumption and short network lifetime of target tracking in Wireless Sensor Network (WSN), a moving target tracking technology based on dynamic clustering was proposed. Firstly, a Two-Ring Dynamic Clustering (TRDC) structure and the corresponding TRDC updating methods were proposed. Secondly, based on centroid localization and considering node energy, the Centroid Localization based on Power-Level (CLPL) algorithm was proposed. Finally, in order to further reduce the energy consumption of the network, the CLPL algorithm was improved into a random localization algorithm. Simulation results indicate that, compared with a static cluster, the network lifetime is increased by 22.73%; compared with an acyclic cluster, the loss rate is decreased by 40.79%; and the accuracy differs little from that of the Received Signal Strength Indicator (RSSI) algorithm. The proposed technology can effectively ensure tracking accuracy while reducing energy consumption and loss rate.
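Centroid localization, which CLPL builds on, estimates the target position as a weighted average of the positions of the anchor nodes that hear it. A generic sketch (weighting by received power level is the CLPL idea; the weight definition here is an illustrative stand-in):

```python
def weighted_centroid(anchors):
    """Estimate a target position as the weighted centroid of anchor
    positions; anchors is a list of (x, y, weight) triples, where the
    weight could be, e.g., a received power level."""
    total = sum(w for _, _, w in anchors)
    x = sum(x * w for x, _, w in anchors) / total
    y = sum(y * w for _, y, w in anchors) / total
    return x, y
```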
Online abnormal event detection with spatio-temporal relationship in river networks
MAO Yingchi, JIE Qing, CHEN Hao
Journal of Computer Applications    2015, 35 (11): 3106-3111.   DOI: 10.11772/j.issn.1001-9081.2015.11.3106
When an abnormal event occurs in a river network, the spatio-temporal correlation among sensor nodes is very obvious, yet existing methods generally treat the temporal and spatial properties of the data separately. Therefore, a decentralized spatio-temporal abnormality detection algorithm based on a Probabilistic Graphical Model (PGM) was proposed. Firstly, a Connected Dominating Set (CDS) algorithm was used to select a subset of sensor nodes, avoiding the need to monitor all of them; then a Markov Chain (MC) was used to predict temporal abnormal events; finally, a Bayesian Network (BN) was utilized to model the spatial dependency among sensors, and the temporal and spatial evidence was combined to predict whether an abnormal event would occur. Experimental results demonstrate that, compared with a simple threshold algorithm and the BN algorithm, the proposed algorithm has higher detection precision and lower delay, greatly reducing communication overhead and improving response speed.
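The temporal component, a first-order Markov chain over discrete sensor states, can be sketched as follows: estimate transition probabilities from an observed state sequence, then predict the most likely successor state (state labels here are hypothetical):

```python
from collections import defaultdict

def transition_matrix(states):
    """Estimate first-order Markov transition probabilities from a
    sequence of observed states."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def most_likely_next(matrix, state):
    """Most probable successor of state under the estimated chain."""
    return max(matrix[state], key=matrix[state].get)
```

In the full algorithm, this temporal prediction is one input; the Bayesian network over neighboring sensors supplies the spatial evidence that is fused with it.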
Related task scheduling algorithm based on task hierarchy and time constraint in cloud computing
CHEN Xi, MAO Yingchi, JIE Qing, ZHU Lili
Journal of Computer Applications    2014, 34 (11): 3069-3072.   DOI: 10.11772/j.issn.1001-9081.2014.11.3069
Concerning the delay of related task scheduling in cloud computing, a Related Task Scheduling algorithm based on Task Hierarchy and Time Constraint (RTS-THTC) was proposed. The related tasks and their execution order were represented by a Directed Acyclic Graph (DAG), and task execution concurrency was improved by the proposed hierarchical task model. Through calculation of the total time constraint in each task layer, tasks were dispatched to the resource with the minimum execution time. Experimental results demonstrate that the proposed RTS-THTC algorithm achieves better performance than the Heterogeneous Earliest-Finish-Time (HEFT) algorithm in terms of total execution time and task delay.
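The hierarchical task model rests on splitting the task DAG into layers in which tasks have no mutual dependencies and can run concurrently. A standard level-by-level (Kahn-style) decomposition sketch, assuming tasks as string ids and edges as (predecessor, successor) pairs:

```python
from collections import defaultdict

def dag_layers(edges, tasks):
    """Split a task DAG into layers: a task enters a layer once all of
    its predecessors have appeared in earlier layers, so tasks within
    one layer are mutually independent and can run concurrently."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    layers = []
    current = [t for t in tasks if indeg[t] == 0]  # roots: no predecessors
    while current:
        layers.append(current)
        nxt = []
        for u in current:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        current = nxt
    return layers
```

In RTS-THTC, such a layering is the structure over which the per-layer time constraints are computed before dispatching each layer's tasks to resources.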